Measurement for Improvement

HLTH 3024 Safety and Quality in Paramedicine

The Model for Improvement (MFI)

Framework Overview

Developed by Associates for Process Improvement, the MFI provides a structured approach to improvement activities. It consists of three fundamental questions followed by the Plan-Do-Study-Act (PDSA) cycle.

  1. What are we trying to accomplish? (Aim)
  2. How will we know that a change is an improvement? (Measurement)
  3. What change can we make that will result in improvement? (Change Ideas)

PDSA Cycle:
  • Plan: Objective, predictions, plan for data collection.
  • Do: Carry out the plan, document observations.
  • Study: Analyse data, compare to predictions.
  • Act: Adapt, Adopt, or Abandon the change.

Measurement Terminology

Understanding the distinction between these terms is critical for accurate quality measurement.

  • Quality Indicator (QI): A measurable element for which there is evidence or consensus that it can be used to assess quality. It indicates quality but is not a direct measure of it. Paramedicine example: patients with suspected STEMI receive a 12-lead ECG.
  • Review Criterion: A systematically developed statement used to determine retrospectively if a specific act of care occurred. Paramedicine example: was a 12-lead ECG performed within 10 minutes of arrival?
  • Measure: The expression of an indicator as a number (rate, ratio, percentage, mean). Paramedicine example: the percentage of STEMI patients who received a 12-lead ECG within 10 minutes.
  • Standard: The level of compliance required. A Target Standard is the prospective goal (e.g., 95%); an Achieved Standard is the retrospective result (e.g., 88%). Paramedicine example: 90% of eligible patients should receive aspirin.
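
To make the distinction between a measure, a target standard, and an achieved standard concrete, here is a minimal Python sketch. The case counts, variable names, and thresholds are illustrative assumptions, not course or service data.

```python
# Illustrative sketch only: hypothetical case counts, not real service data.
eligible_patients = 120   # denominator: patients eligible for aspirin
received_aspirin = 108    # numerator: eligible patients who received aspirin

# The measure expresses the indicator as a percentage.
achieved_standard = received_aspirin / eligible_patients * 100
target_standard = 90.0    # prospective goal, set in advance

print(f"Achieved standard: {achieved_standard:.1f}%")  # retrospective result
print(f"Target standard:   {target_standard:.1f}%")
print("Target met" if achieved_standard >= target_standard else "Target not met")
```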

Donabedian's Model: Categorising Indicators

The Triad of Quality Assessment

Avedis Donabedian proposed that the quality of care can be assessed across three categories of measures: structure, process, and outcome. Valid process and structure indicators must have a proven link to outcomes.

1. Structure

"The capacity to provide care."

Attributes of the setting (staff, equipment, policies).

Example: Number of ambulances equipped with 12-lead ECG machines.

2. Process

"What is actually done."

Interactions between patients and providers (diagnosis, treatment).

Example: Proportion of STEMI patients receiving aspirin.

3. Outcome

"The effect on health status."

The consequences of care (mortality, recovery, satisfaction).

Example: 30-day survival rate after cardiac arrest.

Attributes of a Good Indicator

Not every available data item makes a good indicator. High-quality indicators must possess specific characteristics (Mainz, 2003):

  • Validity: Does it actually measure what it is intended to measure? Does it reflect the quality of care?
  • Reliability: Does it produce consistent results when measured repeatedly in the same circumstances?
  • Sensitivity: Can it detect changes in the quality of care?
  • Feasibility: Is the data available and affordable to collect?
  • Acceptability: Is it accepted by the clinicians and stakeholders as a fair measure?
  • Evidence-based: Is there scientific evidence linking the indicator to better health outcomes?

Run Charts vs. SPC Charts

1. Run Charts

A simple graph of data plotted over time with a median (middle value) line.

Use: Good for identifying trends or shifts in data over time. Simple to create and interpret.
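
As a rough illustration of how a run chart is built, the sketch below plots invented monthly data with a median centre line. It assumes numpy and matplotlib are available; the values are made up for demonstration.

```python
# Run chart sketch: data plotted over time with a median centre line.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical monthly percentage of STEMI patients receiving a 12-lead ECG.
values = [82, 85, 79, 88, 84, 86, 90, 83, 87, 89, 91, 85]
median = np.median(values)  # run charts use the median, not the mean

plt.plot(range(1, len(values) + 1), values, marker="o", label="Monthly %")
plt.axhline(median, linestyle="--", color="grey", label=f"Median = {median:.1f}")
plt.xlabel("Month")
plt.ylabel("% receiving 12-lead ECG")
plt.title("Run chart (illustrative data)")
plt.legend()
plt.show()
```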

2. Statistical Process Control (SPC) Charts

Also known as Shewhart Charts. Similar to run charts but use a mean (average) line and include Upper and Lower Control Limits (UCL/LCL).

Control Limits (Sigma Limits): Usually set at ±3 standard deviations (sigma) from the mean. They define the borders of common cause variation (expected random variation).

Use: To determine if a process is stable (predictable) or unstable (unpredictable), and to differentiate between common and special cause variation.
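
Building on the run chart sketch above, the example below adds a mean centre line and control limits at ±3 standard deviations, as described in these notes. It is a simplified illustration with invented data; formal individuals (XmR) charts typically estimate sigma from the average moving range rather than the overall standard deviation.

```python
# SPC (Shewhart) chart sketch: mean centre line with ±3 sigma control limits.
import numpy as np
import matplotlib.pyplot as plt

values = np.array([82, 85, 79, 88, 84, 86, 90, 83, 87, 89, 91, 85], dtype=float)
mean = values.mean()
sigma = values.std(ddof=1)  # simplified: sample standard deviation
ucl = mean + 3 * sigma      # upper control limit
lcl = mean - 3 * sigma      # lower control limit

plt.plot(range(1, len(values) + 1), values, marker="o", label="Monthly %")
plt.axhline(mean, color="grey", label=f"Mean = {mean:.1f}")
plt.axhline(ucl, color="red", linestyle="--", label=f"UCL = {ucl:.1f}")
plt.axhline(lcl, color="red", linestyle="--", label=f"LCL = {lcl:.1f}")
plt.xlabel("Month")
plt.ylabel("% receiving 12-lead ECG")
plt.title("SPC chart (illustrative data)")
plt.legend()
plt.show()
```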

Interpreting Charts: Rules for Signals

These rules indicate non-random signals (Special Cause Variation) in SPC charts:

  • Shift: ≥8 consecutive points on one side of the centre line.
  • Trend: ≥6 consecutive points steadily increasing or decreasing.
  • Astronomical Point: A single data point clearly outside the control limits (UCL/LCL).
  • Too Many/Too Few Runs: A run is a series of consecutive points on one side of the centre line (the median on a run chart). Statistical tables give the expected range for the number of runs, given the number of data points; too many or too few runs (an unusual number of crossings) signals non-random variation.
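
A rough Python sketch of the first three rules (shift, trend, astronomical point) follows. The function names are my own, the thresholds mirror the list above, and published rule sets vary slightly, so treat this as illustrative rather than definitive.

```python
# Simple signal checks for a chart; centre is the mean (SPC) or median (run chart).

def has_shift(values, centre, run_length=8):
    """>= run_length consecutive points on one side of the centre line."""
    count, side = 0, 0
    for v in values:
        s = (v > centre) - (v < centre)  # +1 above, -1 below, 0 on the line
        if s == 0:
            continue                     # points on the centre line are skipped
        count = count + 1 if s == side else 1
        side = s
        if count >= run_length:
            return True
    return False

def has_trend(values, length=6):
    """>= length consecutive points steadily increasing or decreasing (ties reset)."""
    up = down = 1
    for prev, curr in zip(values, values[1:]):
        up = up + 1 if curr > prev else 1
        down = down + 1 if curr < prev else 1
        if up >= length or down >= length:
            return True
    return False

def astronomical_points(values, lcl, ucl):
    """Single points clearly outside the control limits."""
    return [v for v in values if v > ucl or v < lcl]

data = [82, 85, 79, 88, 84, 86, 90, 83, 87, 89, 91, 85]
print(has_shift(data, centre=85.75))  # False: no run of 8 points on one side
print(has_trend(data))                # False: no run of 6 rising/falling points
```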

Understanding Variation

Variation is present in all systems. Understanding the type of variation is crucial for appropriate management action.

  • Common Cause Variation: Inherent, natural fluctuation in a stable system; predictable within limits ("noise"). Chart characteristic: points fluctuate randomly but stay inside the control limits, with no shifts or trends. Management strategy: do NOT react to individual points; to improve, you must fundamentally redesign the system (e.g., new equipment, new policy).
  • Special Cause Variation: Variation due to a specific, identifiable external factor; unpredictable ("signal"). Chart characteristic: points outside the control limits (astronomical points), shifts, or trends. Management strategy: investigate immediately, identify the specific cause (e.g., a broken machine, a new staff member), and eliminate or replicate it.

The Error of Tampering: Treating common cause variation as special cause (e.g., reacting to every monthly dip in performance as a crisis) increases variation and instability in the system.